MIND: Maximum Mutual Information Based Neural Decoder
Authors
Abstract
We are witnessing a growing interest in the development of learning architectures with application to digital communication systems. Herein, we consider the detection/decoding problem and aim at developing an optimal neural architecture for such a task. The definition of the optimality criterion is a fundamental step. We propose to use the mutual information (MI) of the channel input-output signal pair, which yields the minimization of the a-posteriori information of the transmitted codeword given the output observation. The computation of the a-posteriori information is a formidable task, and for the majority of channels it is unknown; therefore, it has to be learned. For such an objective, we propose a novel estimator based on a discriminative formulation. This leads to the derivation of the mutual information neural decoder (MIND). The developed architecture is capable not only of solving the decoding problem in unknown channels, but also of returning an estimate of the average MI achieved with the coding scheme, as well as of the decoding error probability. Several numerical results are reported and compared with maximum likelihood strategies.
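To make the optimality criterion concrete: maximizing the MI of the channel input-output pair is equivalent to minimizing the residual a-posteriori information, since H(X|Y) = H(X) - I(X;Y). A minimal numpy sketch on a binary symmetric channel (the channel choice and crossover probability are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Binary symmetric channel with crossover probability eps
# (illustrative choice; the paper targets general unknown channels).
eps = 0.1
px = np.array([0.5, 0.5])                 # uniform channel input
pygx = np.array([[1 - eps, eps],
                 [eps, 1 - eps]])         # transition matrix p(y|x)
pxy = px[:, None] * pygx                  # joint p(x, y)
py = pxy.sum(axis=0)                      # output marginal

# I(X;Y) = H(X) + H(Y) - H(X,Y)
mi = entropy(px) + entropy(py) - entropy(pxy.ravel())

# Residual a-posteriori information H(X|Y) = H(X) - I(X;Y):
# maximizing the MI minimizes the decoder's remaining uncertainty
# about the transmitted symbol given the channel output.
h_post = entropy(px) - mi
```

For eps = 0.1 this gives I(X;Y) ≈ 0.531 bits and H(X|Y) ≈ 0.469 bits; for an unknown channel, MIND learns these quantities rather than computing them from a known transition matrix.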
Similar references
Maximum mutual information regularized classification
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncert...
Nonlinear Feature Transforms Using Maximum Mutual Information
Finding the right features is an essential part of a pattern recognition system. This can be accomplished either by selection or by a transform from a larger number of “raw” features. In this work we learn non-linear dimension reducing discriminative transforms that are implemented as neural networks, either as radial basis function networks or as multilayer perceptrons. As the criterion, we us...
Maximum Mutual Information and Word Classes
Herein, I present some notes concerning the implementation of the now classical method of data clustering called Maximum Mutual Information Clustering. It was introduced in [Mercer et al., 1992] in the context of language modeling. The original article contained some cues concerning its implementation. These are carried out in detail here, together with some new tricks. Results of the test run on 110M wor...
Employing Maximum Mutual Information for Bayesian Classification
In order to employ machine learning in realistic clinical settings we are in need of algorithms which show robust performance, producing results that are intelligible to the physician. In this article, we present a new Bayesian-network learning algorithm which can be deployed as a tool for learning Bayesian networks, aimed at supporting the processes of prognosis or diagnosis. It is based on a ...
MINE: Mutual Information Neural Estimation
We argue that the estimation of the mutual information between high dimensional continuous random variables is achievable by gradient descent over neural networks. This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is backpropable and we prove that it is strongly consistent. We illustrate a handful of appl...
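MINE rests on the Donsker-Varadhan representation of the KL divergence, which lower-bounds the MI for any critic function T. A small numpy sketch checking the bound exactly on a toy discrete joint distribution (the distribution and critics here are illustrative assumptions; MINE itself parameterizes T with a neural network and ascends the bound by gradient descent on samples):

```python
import numpy as np

# Donsker-Varadhan bound underlying MINE (in nats):
#   I(X;Y) >= E_{p(x,y)}[T] - log E_{p(x)p(y)}[exp(T)]   for any critic T.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])              # toy joint p(x, y)
px = pxy.sum(axis=1)
py = pxy.sum(axis=0)
prod = np.outer(px, py)                   # product of marginals p(x)p(y)

true_mi = np.sum(pxy * np.log(pxy / prod))   # exact I(X;Y) in nats

def dv_bound(T):
    """Donsker-Varadhan lower bound for a critic T given on the (x, y) grid."""
    return np.sum(pxy * T) - np.log(np.sum(prod * np.exp(T)))

T_opt = np.log(pxy / prod)    # the optimal critic attains the bound exactly
T_zero = np.zeros_like(pxy)   # an uninformative critic yields a bound of 0
```

Here `dv_bound(T_opt)` equals `true_mi` exactly, while `dv_bound(T_zero)` is 0; training the critic tightens the bound toward the true MI.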
Journal
Journal title: IEEE Communications Letters
Year: 2022
ISSN: 1558-2558, 1089-7798, 2373-7891
DOI: https://doi.org/10.1109/lcomm.2022.3207379